Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods
Authors
Abstract
Minimum residual norm iterative methods for solving linear systems Ax = b can be viewed as, and are often implemented as, sequences of least squares problems involving Krylov subspaces of increasing dimensions. The minimum residual method (MINRES) [C. Paige and M. Saunders, SIAM J. Numer. Anal., 12 (1975), pp. 617–629] and the generalized minimum residual method (GMRES) [Y. Saad and M. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869] are typical examples. In [C. Paige and Z. Strakoš, Bounds for the least squares distance using scaled total least squares, Numer. Math., to appear], revealing upper and lower bounds on the residual norm of any linear least squares (LS) problem were derived in terms of the total least squares (TLS) correction of the corresponding scaled TLS problem. In this paper the theoretical results of [C. Paige and Z. Strakoš, Bounds for the least squares distance using scaled total least squares, Numer. Math., to appear] are extended to the GMRES context. The bounds that are developed are important in theory, but they also have fundamental practical implications for the finite precision behavior of the modified Gram–Schmidt implementation of GMRES, and perhaps for other minimum norm methods.
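As a concrete illustration of this least squares viewpoint, the following minimal NumPy sketch builds an orthonormal Krylov basis with the modified Gram–Schmidt (Arnoldi) process and, at each step k, solves the projected least squares problem min_y || beta*e1 - H_{k+1,k} y || of growing dimension. This is only a sketch under assumptions made here, not the implementation analyzed in the paper: the test matrix, the stopping tolerance, and the use of numpy.linalg.lstsq for the small projected problem (instead of the Givens-rotation update used in practice) are illustrative choices.

import numpy as np

def gmres_mgs(A, b, max_iter=50, tol=1e-10):
    n = b.shape[0]
    beta = np.linalg.norm(b)
    V = np.zeros((n, max_iter + 1))         # orthonormal Krylov basis vectors
    H = np.zeros((max_iter + 1, max_iter))  # upper Hessenberg matrix from Arnoldi
    V[:, 0] = b / beta
    for k in range(max_iter):
        w = A @ V[:, k]
        # modified Gram-Schmidt orthogonalization against v_0, ..., v_k
        for j in range(k + 1):
            H[j, k] = V[:, j] @ w
            w = w - H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 0:
            V[:, k + 1] = w / H[k + 1, k]
        # least squares problem of dimension (k+2) x (k+1):
        #   min_y || beta*e1 - H_{k+1,k} y ||
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        residual = np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y)
        if residual <= tol * beta or H[k + 1, k] == 0:
            return V[:, :k + 1] @ y, residual
    return V[:, :max_iter] @ y, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.eye(100) + 0.1 * rng.standard_normal((100, 100))
    b = rng.standard_normal(100)
    x, res = gmres_mgs(A, b)
    print("residual norm of projected LS problem:", res)

The returned value is the norm of the projected least squares residual, which in exact arithmetic (with a zero initial guess) equals the true residual norm ||b - A x_k||.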
Similar resources
Residual Replacement Strategies for Krylov
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights in the finite precision behaviour of the Krylov subspace methods, computable error bounds are derived for it...
Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals
In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u)‖A‖‖x‖. Building on earlier ideas on residual replacement and on insights in the finite precision behavior of the Krylov subspace methods, computable error bounds are derived for i... (A minimal sketch of this residual replacement idea appears after this list of entries.)
Further analysis of minimum residual iterations
The convergence behavior of a number of algorithms based on minimizing residual norms over Krylov subspaces is not well understood. Residual or error bounds currently available are either too loose or depend on unknown constants which can be very large. In this paper we take another look at traditional as well as alternative ways of obtaining upper bounds on residual norms. In particular, we d...
Residual, Restarting, and Richardson Iteration for the Matrix Exponential
Abstract. A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Assume the matrix exponential of a given matrix times a given vector has to be computed. We interpret the sought-after vector as a value of a vector function satisfying the linea...
Residual, restarting and Richardson iteration for the matrix exponential, revised
Abstract. A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Suppose the matrix exponential of a given matrix times a given vector has to be computed. We develop the approach of Druskin, Greenbaum and Knizhnerman (1998) and interpret the ...
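The residual replacement entries above describe recomputing the residual explicitly so that the recursively updated residual and the true residual b - A x agree to roughly O(u)‖A‖‖x‖. The following minimal sketch illustrates only that idea, using a simple Richardson iteration rather than a Krylov method, and is not the strategy of the cited papers; the fixed replace_every trigger is a hypothetical stand-in for the computable deviation bounds derived there.

import numpy as np

def richardson_with_replacement(A, b, omega, n_iter=200, replace_every=25):
    # replace_every is an illustrative trigger; the cited papers replace only
    # when a computable estimate of the deviation between the updated and the
    # true residual exceeds a threshold of order u*||A||*||x||.
    x = np.zeros_like(b)
    r = b.copy()                      # recursively updated residual
    for k in range(1, n_iter + 1):
        x = x + omega * r
        r = r - omega * (A @ r)       # recursive update; rounding errors accumulate
        if k % replace_every == 0:
            r = b - A @ x             # residual replacement: recompute the true residual
    return x, r

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = np.eye(50) + 0.05 * rng.standard_normal((50, 50))
    b = rng.standard_normal(50)
    x, r = richardson_with_replacement(A, b, omega=0.9)
    print("updated residual norm:", np.linalg.norm(r))
    print("true residual norm   :", np.linalg.norm(b - A @ x))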
Journal title: SIAM J. Scientific Computing
Volume: 23, Issue: -
Pages: -
Publication date: 2002